• Thursday, September 5, 2024

    Luma AI's latest video generator, Dream Machine 1.6, introduces precise controls that let users apply 12 camera motions through text prompts. This addresses a significant limitation in AI video generation by offering finer control over the resulting clips. Positive feedback from early adopters highlights the considerable upgrade in dynamic motion and user-friendly options. Competing with Runway's Gen-2 and OpenAI's Sora, this advancement demonstrates rapid progress toward near-traditional film quality with AI.

  • Monday, August 26, 2024

    Luma Labs' Dream Machine AI video generator version 1.5 brings improved realism, better text rendering, and faster generation: the new model can create five seconds of high-quality video in about two minutes, a turnaround that could benefit creators working under tight deadlines. AI video technology continues to improve and is set to have a major impact on industries such as entertainment, advertising, and education.

  • Tuesday, August 13, 2024

    Flux AI, from Black Forest Labs, has emerged as the latest promising open-source AI image generation tool, and it is capable of running on consumer-grade laptops. It excels at rendering people and prompt adherence, outperforming competitors like Midjourney in some respects. The model is available in Pro, Dev, and Schnell versions, with a forthcoming text-to-video model announced as open-source as well.

    Hi Impact
  • Monday, August 26, 2024

    Adobe has introduced "Magic Fixup," an AI model leveraging video data to revolutionize photo editing. It was trained on millions of video frames, enabling it to understand object and scene changes under varied conditions and make sophisticated edits while preserving fine details. The technology could drastically enhance workflows across industries such as advertising, filmmaking, and content creation, though it raises ethical concerns about image authenticity.

  • Friday, September 20, 2024

    Generative AI tools are coming to YouTube through Veo, a video generation model from DeepMind that enables creators to generate six-second Shorts clips and backgrounds with text prompts. AI-generated content will be watermarked with SynthID. Other AI updates include tools for titles, thumbnails, and video ideas. Creators remain concerned about AI-generated content's impact on originality.

  • Tuesday, April 16, 2024

    Adobe is developing an AI model to generate video, which will be integrated into Premiere Pro later this year. It will feature object addition, removal, and generative extend capabilities. Adobe is collaborating with third-party vendors and addressing deepfake concerns through Content Credentials.

  • Tuesday, June 11, 2024

    AI image generation has advanced rapidly since text-to-image models emerged in 2022. Using a child's game analogy, this article explains how these models iteratively refine noisy inputs into detailed, specific images, showcasing the pace of progress and the potential of AI in visual creativity.
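
    The refinement process the article describes can be sketched as a toy denoising loop: starting from pure noise, each step nudges the sample a little closer to a target "image" (here just a short vector of numbers). This is only an illustration of the iterative-refinement idea; `denoise_step` is a hypothetical stand-in for a trained diffusion network, not a real model.

```python
import random

def denoise_step(sample, target, step, total_steps):
    # A real diffusion model uses a trained network to predict the noise
    # to remove; here we fake it by blending toward a known target.
    blend = 1.0 / (total_steps - step)
    return [s + blend * (t - s) for s, t in zip(sample, target)]

def generate(target, total_steps=50, seed=0):
    rng = random.Random(seed)
    # Sampling starts from pure Gaussian noise, as in diffusion models.
    sample = [rng.gauss(0.0, 1.0) for _ in target]
    for step in range(total_steps):
        sample = denoise_step(sample, target, step, total_steps)
    return sample

target = [0.2, -0.5, 0.9]
result = generate(target)  # after all steps, the noise has been refined away
```

The key intuition is that no single step produces the image; many small corrections, applied in sequence, do.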

  • Thursday, June 20, 2024

    Snap unveiled a real-time, on-device image diffusion model for creating vivid AR experiences and introduced new generative AI tools for AR creators at the Augmented World Expo. These enhancements, which include a new Lens Studio 5.0 with an AI assistant, aim to make AR content creation faster and more efficient with features like realistic ML face effects, 3D asset generation, and character creation using text prompts.

  • Monday, July 29, 2024

    Stability AI has unveiled Stable Video 4D, a novel generative AI model built on its existing Stable Video Diffusion and Stable Video 3D models. Unlike traditional 3D models, Stable Video 4D produces videos from 8 different perspectives over time using dynamic object datasets. This cutting-edge technology is expected to be influential in sectors like movie production, gaming, and AR/VR.

  • Thursday, March 14, 2024

    Google DeepMind's SIMA is a generalist AI agent for 3D games that follows natural-language instructions across various video game environments, marking a shift towards creating versatile, instructable AI systems.

  • Tuesday, September 24, 2024

    Amazon has introduced two new generative AI tools for advertisers: Video Generator and "live images." These tools aim to produce visuals quickly and affordably. Video Generator transforms a single product image into 6- to 9-second videos with customizable elements and creates four variations, while the live images feature generates animated GIFs from still images. The tools are designed to meet advertiser demand for cost-effective video marketing and are currently in beta for select U.S. advertisers.

  • Monday, August 19, 2024

    Runway ML's new Gen-3 Alpha Turbo is now available, offering 7x faster AI video generation at half the price of its predecessor across various subscription plans, including free trials. This speed advancement significantly reduces time lag, fostering more efficient workflows, particularly for time-sensitive industries. Runway continues to push for improvements, including enhanced control mechanisms, while navigating the ethical landscape surrounding AI training data practices.

  • Friday, June 7, 2024

    The Together AI team has released a novel vision-language model (VLM) that excels at processing extremely high-resolution images thanks to its efficient architecture.

  • Tuesday, April 23, 2024

    Adobe is integrating generative AI video tools into Premiere Pro, including new capabilities for shot extension, object addition/removal, and text-to-video features, streamlined by its Firefly video model. The updates, including a technology preview and general availability of AI-powered audio workflows, aim to enhance video production efficiency and creativity.

  • Friday, May 24, 2024

    This tool generates an AI video from keyframe images based on users' unique styles, concepts, or products.

  • Friday, September 20, 2024

    Snapchat has announced a new AI video-generation tool for select creators that enables video creation from text and soon image prompts. The tool, powered by Snap's foundational video models, will be available in beta on the web. Snap aims to compete with companies like OpenAI and Adobe but has not shared output examples yet.

  • Wednesday, October 2, 2024

    Pika Labs has announced Pika 1.5, an upgraded version of its platform featuring more realistic movement, impressive visuals, and physics-bending effects called Pikaffects. The company also continues to promote Pika 1.0, its idea-to-video platform for creating and editing videos with AI, now open to new users on both the web and Discord via sign-up at pika.art; a blog post shares the team's journey and future plans. Pika Labs has raised $55 million across several funding rounds, backed by prominent tech investors and guided by academic advisors from institutions such as Stanford and Harvard, and credits its users, investors, and advisors for supporting its continued growth in AI video creation and editing.

  • Tuesday, June 18, 2024

    Runway has trained an extremely powerful new video generation model. It will power many of the existing features on its platform. Examples are available in the provided link.

  • Monday, April 15, 2024

    xAI has announced that its latest flagship model has vision capabilities on par with (and in some cases exceeding) state-of-the-art models.

    Hi Impact
  • Wednesday, July 3, 2024

    Captions, a video editing app backed by a16z, Kleiner Perkins, and Sequoia Capital, has launched a feature that automates video editing with custom graphics, zooms, music, and more using AI. This tool targets vertical videos of a single person but also allows users to create AI-generated videos from prompts, which can then be edited with the same features.

  • Thursday, September 26, 2024

    YouTube has announced a significant integration of Google DeepMind's AI video generation model, Veo, into its platform, specifically targeting YouTube Shorts. The integration was highlighted during YouTube's Made On YouTube event, emphasizing the growing role of artificial intelligence in content creation. Veo, introduced at Google's I/O 2024 developer conference, is designed to compete with AI video generation tools like OpenAI's Sora and Runway, and allows creators to produce high-quality 1080p video clips in various cinematic styles.

    The Veo integration is a major upgrade over the existing "Dream Screen" feature, launched in 2023, which let creators generate backgrounds using text prompts. Veo goes a step further by allowing the editing and remixing of previously generated footage, and will enable creators to generate six-second standalone video clips that can serve as filler scenes for smoother transitions and stronger storytelling.

    YouTube is also rolling out several other features: "Jewels," a digital item viewers can send during livestreams, similar to TikTok's gifting system; expanded automatic dubbing with support for more languages, plus a test that captures a creator's tone and intonation in dubbed audio; enhanced Community hubs for better interaction between creators and followers; and a wider rollout of the "hyping" feature, with the most-hyped videos showcased on a leaderboard. Creators will additionally get AI tools for brainstorming video ideas, generating thumbnails, and responding to comments. Overall, these updates reflect YouTube's commitment to leveraging AI to enhance user experience and empower creators, positioning itself competitively in the evolving landscape of digital content creation.

  • Thursday, September 19, 2024

    Together AI and Meta have partnered to make a tool that enables users to build entire apps from just a prompt on the LlamaCoder platform. It is akin to Claude Artifacts but purely created to show off the speed of Together AI's inference engine.

  • Thursday, April 4, 2024

    DALL-E images can now be modified using a new editor interface from OpenAI that lets users describe changes using text prompts. Users can use the new select button to give specific instructions for a particular part of an image. Alternatively, users can make general changes to the image by entering a prompt in the chat sidebar.

    Hi Impact
  • Tuesday, April 16, 2024

    Adobe is set to introduce generative AI tools into Premiere Pro, including features like Generative Extend, Object Addition and Removal, and Text to Video. It aims to enhance the editing process by seamlessly integrating AI technology directly into the platform, with a focus on providing real-time solutions and streamlining workflows for video editors.

  • Thursday, March 21, 2024

    OpenAI's Sora model for AI-generated video builds on diffusion models, but instead of operating on raw pixels, it works in a compressed "latent space" and uses the Transformer architecture. As with large language models, throwing more compute at Sora yields better results. Sora is good enough for real-world use, but it is expensive: inference at peak demand could require hundreds of thousands of GPUs.
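
    The compressed latent-space design can be sketched as a three-stage pipeline: an encoder maps pixels into a much smaller latent representation, denoising runs there (far cheaper than on raw pixels), and a decoder maps the result back to pixels. The encoder, denoiser, and decoder below are hypothetical placeholders to show the data flow, not Sora's actual components.

```python
def encode(frames, factor=8):
    # Placeholder encoder: keep every 8th value per frame, standing in
    # for a learned compressor that maps pixels to latents.
    return [frame[::factor] for frame in frames]

def denoise_latents(latents, steps=10):
    # Placeholder for the diffusion transformer: a real system runs many
    # denoising steps over spacetime patches of the latents.
    for _ in range(steps):
        latents = [[0.9 * v for v in frame] for frame in latents]
    return latents

def decode(latents, factor=8):
    # Placeholder decoder: upsample by repeating latent values.
    return [[v for v in frame for _ in range(factor)] for frame in latents]

# Working in latent space shrinks the data the expensive model touches:
frames = [[float(i % 7) for i in range(64)] for _ in range(4)]
latents = encode(frames)        # 8x fewer values per frame
video = decode(denoise_latents(latents))
```

The cost argument is visible in the shapes: the expensive middle stage only ever sees the compressed latents, which is why diffusing in latent space scales so much better than diffusing over raw pixels.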

  • Wednesday, September 25, 2024

    Google Photos is introducing an AI-powered video editor for Android and iOS that features new tools like a "Speed" tool for slow-motion or fast-forward effects and an "Auto Enhance" button for color and stability improvements. AI presets enable automatic cropping, speed control, lighting adjustments, and effects such as zooming and dynamic motion tracking.

    Hi Impact
  • Wednesday, September 18, 2024

    Snapchat is enhancing its AI tools for creators. It introduced Easy Lens at the Snap Partner Summit, a feature that converts plain-English descriptions into augmented reality (AR) Lenses using generative AI through Lens Studio. It also announced Body Morph for generating 3D characters and outfits and new functionalities like icon creation and Bitmoji animations.

  • Tuesday, July 9, 2024

    DeepMind and Harvard University developed a virtual rat with AI neural networks trained on actual rat movements and neural patterns to probe the brain circuits responsible for complex motor skills. This bio-inspired AI has the capacity to generalize learned movement skills to new environments, offering insights into brain function and advancing robotics. The research demonstrates that digital simulations can effectively mimic and decode neural activity related to different behaviors.

  • Thursday, April 4, 2024

    Observability costs are top of mind for many organizations. This post covers a brief history of modern observability, how the industry arrived at its current observability cost crisis, and a new way of thinking about observability costs and pricing models. Over the past few decades, infrastructure-as-a-service providers and open source have made it easy to produce voluminous amounts of telemetry, while little regard has been given to the cost of storing and shipping that telemetry, causing a cost crisis in observability tooling. One solution may be to not send any observability data by default.
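
    The "don't send by default" idea can be sketched as a telemetry wrapper that records events cheaply in a local buffer and ships nothing unless explicitly enabled, optionally sampling what it does ship. The class and its API are illustrative, not from any particular vendor's SDK.

```python
class Telemetry:
    """Collects events locally; sends nothing unless explicitly enabled."""

    def __init__(self, enabled=False, sampler=None):
        self.enabled = enabled
        self.sampler = sampler or (lambda event: True)
        self.buffer = []
        self.sent = []  # stand-in for a network exporter

    def record(self, event):
        # Recording locally is always cheap; cost is only paid on send.
        self.buffer.append(event)

    def flush(self):
        if not self.enabled:
            self.buffer.clear()  # default behavior: drop, don't ship
            return 0
        to_send = [e for e in self.buffer if self.sampler(e)]
        self.sent.extend(to_send)
        self.buffer.clear()
        return len(to_send)

t = Telemetry()                      # off by default
t.record({"span": "db.query", "ms": 12})
dropped = t.flush()                  # ships 0 events

t2 = Telemetry(enabled=True, sampler=lambda e: e["ms"] > 10)
t2.record({"span": "db.query", "ms": 12})
t2.record({"span": "cache.get", "ms": 1})
shipped = t2.flush()                 # ships only the slow span
```

Inverting the default this way means storage and egress costs are only incurred when someone has deliberately opted a service into exporting telemetry, rather than by every service that happens to emit it.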

    Hi Impact
  • Thursday, April 4, 2024

    Googling 'File System API' brings up a lot of similarly named APIs. This can be confusing, as there are multiple standards and names, but there are actually fewer than it seems at first, and many build on each other, adding layers of functionality. This article gives a tour of the different file system APIs. Interacting with file systems is complex: the different APIs handle different actions and security concerns.

    Md Impact